    Human object annotation for surveillance video forensics

    A system that can automatically annotate surveillance video in a manner useful for locating a person from a given description of clothing is presented. Each human is annotated based on two appearance features: the primary colors of the clothes and the presence of text/logos on the clothes. The annotation occurs after a robust foreground extraction stage employing a modified Gaussian mixture model-based approach. The proposed pipeline begins with a preprocessing stage in which the color appearance of an image is improved using a color constancy algorithm. To annotate the color of human clothing, we compute a color histogram in HSV space and find its local maxima to extract the dominant colors of different parts of a segmented human object. To detect text/logos on clothes, we begin by extracting connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or non-text on the basis of their local energy-based shape histogram features. Further, to detect humans, a novel technique is proposed that uses contourlet transform-based local binary pattern (CLBP) features. In the proposed method, we extract the uniform, direction-invariant LBP feature descriptor for contourlet-transformed high-pass subimages from the vertical and diagonal directional bands. In the final stage, the extracted CLBP descriptors are classified by a trained support vector machine. Experimental results illustrate the superiority of our method on large-scale surveillance video data.
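    The dominant-color step described above (local maxima of an HSV-space histogram) can be sketched roughly as follows. This is a minimal illustration over the hue channel only; the bin count, the circular peak test, and the 5% minimum-share threshold are assumptions for illustration, not parameters from the paper:

    ```python
    import numpy as np

    def dominant_colors(hue_values, bins=36, min_share=0.05):
        """Find dominant hues as local maxima of a hue histogram.

        hue_values: 1-D array of hues in [0, 180) (OpenCV-style half range).
        Returns the bin-centre hues of peaks holding at least `min_share`
        of all pixels.
        """
        hist, edges = np.histogram(hue_values, bins=bins, range=(0, 180))
        centres = (edges[:-1] + edges[1:]) / 2
        peaks = []
        for i in range(bins):
            left = hist[(i - 1) % bins]    # circular neighbourhood: hue wraps
            right = hist[(i + 1) % bins]
            if hist[i] >= left and hist[i] >= right \
                    and hist[i] >= min_share * hue_values.size:
                peaks.append(centres[i])
        return peaks

    # Synthetic pixels clustered around two hues (red-ish ~5, blue-ish ~120)
    rng = np.random.default_rng(0)
    hues = np.concatenate([
        rng.normal(5, 2, 600) % 180,
        rng.normal(120, 3, 400) % 180,
    ])
    print(dominant_colors(hues))
    ```

    In practice the same peak search would be run per body part on the segmented foreground mask, with saturation/value channels used to separate achromatic (black/white/grey) clothing from true hues.
    
    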

    Carried object detection in videos using color information

    Automatic baggage detection has become a subject of significant practical interest in recent years. In this paper, we propose an approach to baggage detection in CCTV video footage that uses color information to address some of the vital shortcomings of state-of-the-art algorithms. The proposed approach consists of the typical steps used in baggage detection, namely, estimating the moving direction of humans carrying baggage, constructing human-like temporal templates, and aligning them with the best-matched view-specific exemplars. In addition, we utilize color information to define the region that most likely belongs to a human torso, in order to reduce false positive detections. A key novel contribution is the estimation of a person's viewing direction using machine learning and shoulder-shape-related features. Further enhancement of baggage detection and segmentation is achieved by exploiting the properties of the CIELAB color space. The proposed system has been extensively tested for its effectiveness, at each stage of improvement, on the PETS 2006 dataset and on additional CCTV video footage captured to cover specific test scenarios. The experimental results suggest that the proposed algorithm is capable of surpassing the performance of state-of-the-art baggage detection algorithms.
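    The CIELAB property typically exploited for this kind of color-based segmentation is approximate perceptual uniformity: Euclidean distance in Lab (CIE76 ΔE) tracks perceived color difference far better than distance in RGB. A minimal sketch of an sRGB-to-Lab conversion and the ΔE distance follows; the paper does not give its exact formulation, so the D65 white point and the example colors here are illustrative assumptions:

    ```python
    import numpy as np

    def srgb_to_lab(rgb):
        """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
        rgb = np.asarray(rgb, dtype=float)
        # Inverse sRGB gamma (linearise)
        lin = np.where(rgb <= 0.04045, rgb / 12.92,
                       ((rgb + 0.055) / 1.055) ** 2.4)
        # Linear RGB -> XYZ (standard sRGB matrix, D65)
        m = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
        xyz = m @ lin
        # Normalise by the D65 reference white
        xyz /= np.array([0.95047, 1.0, 1.08883])
        # Piecewise cube-root used by the CIELAB definition
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                     xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        L = 116 * f[1] - 16
        a = 500 * (f[0] - f[1])
        b = 200 * (f[1] - f[2])
        return np.array([L, a, b])

    def delta_e(c1, c2):
        """CIE76 colour difference: Euclidean distance in Lab space."""
        return float(np.linalg.norm(srgb_to_lab(c1) - srgb_to_lab(c2)))

    # Example: a dark bag against a dark-blue torso region — the two colours
    # are close in raw RGB but measurably apart in Lab.
    print(delta_e((0.05, 0.05, 0.05), (0.05, 0.05, 0.30)))
    ```

    Thresholding such ΔE distances between candidate baggage pixels and the estimated torso color is one straightforward way to separate carried objects from clothing of similar brightness.
    
    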